The potential sensitivity of the results of the modelling exercise to the definition of the spatial units used is commonly referred to as the modifiable areal unit problem (MAUP) or the zone definition problem (ZDP). Reviews of the general problem are provided elsewhere [36,17] and there have been investigations, although rather inconclusive, of the problem in other types of spatial modelling. [p. 27]
Contemporary spatial theories have led to the emergence of two major schools of analytical thought: the macroscopic school based upon probability arguments and entropy-maximising formulations [53] and the microscopic one corresponding to a behavioural or utility-theoretic approach (for an overview see [3]). [p. 34]
GIS offers urban modelers a data structure and a set of data manipulation tools through which they can explore the spatial processes inherent in urban activities. What are spatial processes, and how can they be measured?
Spatial processes are those processes by which activities at one location affect or are affected by activities at another location. Haining [23, pp. 24-26] and Krieger [27] have identified four types of spatial process that arise in urban activities:
These four processes are not necessarily mutually exclusive: they may occur simultaneously or even in opposition to each other.
To model spatial processes one must collect spatial data, or at least collect data that includes spatial attributes. Traditional data collection and representation methods, unfortunately, tend to obscure or distort the spatial nature of attributes. The practice of collecting socio-economic data at an aggregated zonal level, for example, treats all events within a particular zone as spatially homogeneous. This tends to minimize intrazonal differences while exaggerating interzonal ones. The representation of data in two-dimensional tables or matrices—another common practice in urban modeling—often requires the casting aside of spatial characteristics. Once lost, such characteristics are difficult to reintroduce.
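A minimal sketch of this aggregation effect (the zone labels and values below are invented purely for illustration): summarising point observations as zonal means discards the within-zone variation while leaving only the contrast between zones.

```python
import statistics

# Hypothetical point observations (e.g. house prices) tagged with the zone
# they fall in; all values are invented for illustration only.
observations = {
    "zone_A": [120, 480, 150, 450],   # internally very heterogeneous
    "zone_B": [290, 310, 300, 305],   # internally homogeneous
}

for zone, values in observations.items():
    zonal_mean = statistics.mean(values)
    within_zone_sd = statistics.stdev(values)
    print(f"{zone}: mean={zonal_mean:.0f}, within-zone s.d.={within_zone_sd:.0f}")

# Aggregating to zones keeps only the two zonal means (300 and 301), which
# look nearly identical even though zone_A conceals far more internal
# variation (s.d. of roughly 191 versus 9): intrazonal differences are
# minimized, and interzonal contrasts are all the aggregated data can express.
```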
Advances in GIS data representation provide a technical basis for overcoming some of these difficulties, but to take full advantage of the richness of spatial information represented in GIS, geographers and modelers must rethink how they extract spatial measurements from maps and other sources of spatial information. This section discusses how to use existing GIS analysis functions to operationalize four types of spatial measurement [...]:
[pp. 64-66]
Hedonic price models are used to decompose the transaction price of a composite bundle of goods or services (such as housing) into its component or implicit prices [45]. Hedonic models are estimated statistically, usually using some form of linear or non-linear regression. When applied to housing markets, the dependent variable is usually the observed transaction price. Typical explanatory variables include (1) the physical attributes of the property (for example, lot size), (2) supply and demand conditions in the property market (for example, vacancy rates), (3) financial characteristics of the buyer and/or seller, and (4) locational or spatial attributes of the property. [p. 67]
In Alameda County, for every meter closer a home was to the nearest BART station (measured along the street network), its 1990 sales price rose by US$2.29. For every meter it was closer to a freeway interchange, the price declined by US$2.80. In Contra Costa County, the respective price changes were +US$1.96 and -US$3.41. Proximity to the BART line/freeway (within 300m) had no statistically significant effect.
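A minimal sketch of how such a hedonic specification might be estimated (the data, variable names and fitted values below are hypothetical, not those of the study quoted above): the coefficient on each distance variable is read directly as the implicit price of one additional meter, which is how figures like the +US$2.29/m effect are interpreted.

```python
import numpy as np

# Hypothetical sample: each row is one sale with (lot size in m^2, network
# distance to nearest BART station in m, distance to nearest freeway
# interchange in m); prices in US$. All numbers are invented.
X_raw = np.array([
    [450.0, 1200.0,  800.0],
    [600.0,  300.0, 1500.0],
    [500.0, 2500.0,  400.0],
    [700.0,  900.0, 2000.0],
    [550.0, 1800.0, 1000.0],
])
price = np.array([210_000.0, 265_000.0, 180_000.0, 295_000.0, 225_000.0])

# Linear hedonic specification:
#   price = b0 + b1*lot_size + b2*dist_bart + b3*dist_freeway + error
X = np.column_stack([np.ones(len(price)), X_raw])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
b0, b_lot, b_bart, b_fwy = coef

# b_bart is the implicit price of one extra meter of distance to BART;
# a negative estimate means each meter closer adds |b_bart| dollars.
print(f"intercept={b0:,.0f}, lot={b_lot:,.2f}, dist_BART={b_bart:,.2f}, dist_fwy={b_fwy:,.2f}")
```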
The current limitations of GIS in providing a dynamic representation of spatial phenomena at a micro level are only one obstacle to this [spatial dynamics modelling] development. The major challenge is to formulate a conceptual framework and to integrate theories of individual actors' behaviour, micro-level interaction and space/time constraints in order to model spatial micro-level dynamics. Another difficulty is to generate and analyse empirical data on micro processes in order to specify and calibrate dynamic spatial micro models. [p. 144]
The conceptual ideas behind the microanalytic modelling approach were originally developed by Guy Orcutt in the 1950s [37]. During the following years he and three other scholars, Greenberger, Korbel and Rivlin, tried to put these ideas into practice by developing an operational model. Four years later, in 1961, they published a book presenting the results of their efforts [39]. Since then many microanalytic models have been designed and executed. [p. 145]
The central feature of the microanalytic approach is the identification and representation of individual actors in the socio-economic system and the way in which their behaviour changes over time. The modelled decision-making units might be individuals, households, firms, banks, corporations and so on. This shift of focus, from sectors of the economy to individual decision-making units, is the basis of all microsimulation work. Knowledge about individual behaviour, other actors and decision-making units is integrated into the model, and the consequences of many individuals' behaviour or responses to external influences are explored [28]. Major advantages of microsimulation models are the possibility of incorporating individual behaviour and micro processes in the models and of applying theories of individual behaviour. Moreover, the heterogeneity of information can be fully represented in the model and maintained during simulation. The output will consequently contain a great variety of information about general and specific conditions at the micro level, information that can be easily aggregated to levels suitable for answering research and applied questions. This facilitates a detailed analysis of micro processes or sequences of individuals' actions and provides opportunities for a more thorough understanding of the mechanisms behind macro processes and of the consequences at aggregate or disaggregate levels.
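A minimal illustration of this micro-unit representation (the household attributes and records below are hypothetical): each decision-making unit is carried as an individual record, so heterogeneity survives the simulation, and aggregate indicators are derived only when needed by summarising over units.

```python
from dataclasses import dataclass

# Hypothetical micro units: each household is represented individually,
# with its own attributes, rather than as a share of a sector total.
@dataclass
class Household:
    size: int
    income: float       # annual income
    tenure: str         # "owner" or "renter"

population = [
    Household(1, 18_000.0, "renter"),
    Household(4, 62_000.0, "owner"),
    Household(2, 35_000.0, "renter"),
    Household(3, 80_000.0, "owner"),
]

# Heterogeneity is kept at the micro level; aggregates are computed on demand.
mean_income = sum(h.income for h in population) / len(population)
owner_share = sum(h.tenure == "owner" for h in population) / len(population)
print(f"mean income: {mean_income:,.0f}, owner share: {owner_share:.0%}")
```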
There are limitations to the approach. Irrespective of the sophistication of the method, an appropriate data set is required to provide valid projections. Attributes of the decision units relevant to the problem in question are needed. Since the objective of many microsimulation analyses is to study social processes, the requirements of the data set go beyond a single cross-section. It should rather be based on frequent observations at several points in time in order to facilitate the estimation of transition probabilities based on dynamically tested behavioural hypotheses. This kind of data is rarely available, which has compelled researchers to turn to alternatives such as surveys/samples and synthetic data. In most countries the population census is the only source of micro data. The methodology of synthetic sampling aims at linking different series of separate tabulations by means of conditional probability analysis. The resulting set of probabilities is then used to create a sample of micro entities which is the foundation for microsimulation modelling (see [8] for a thorough presentation of the principles).
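A minimal sketch of synthetic sampling along these lines (the tabulations, conditional probabilities and category labels are invented, not taken from [8]): separate published tabulations are linked through conditional probabilities, and synthetic micro entities are then drawn from the resulting joint distribution.

```python
import random

random.seed(1)

# Hypothetical published tabulations: a marginal distribution of household
# size, and the probability of car ownership conditional on size. Linking
# them gives a joint distribution from which synthetic households are drawn.
p_size = {1: 0.30, 2: 0.35, 3: 0.20, 4: 0.15}             # P(size)
p_car_given_size = {1: 0.40, 2: 0.65, 3: 0.80, 4: 0.90}   # P(car | size)

def draw_household():
    # Draw household size from its marginal, then car ownership from the
    # conditional probability for that size.
    u = random.random()
    cumulative = 0.0
    for size, p in p_size.items():
        cumulative += p
        if u <= cumulative:
            break
    has_car = random.random() < p_car_given_size[size]
    return {"size": size, "car": has_car}

# Create a synthetic sample of micro entities as a base for microsimulation.
synthetic_population = [draw_household() for _ in range(10_000)]
car_share = sum(h["car"] for h in synthetic_population) / len(synthetic_population)
print(f"synthetic car-ownership share: {car_share:.3f}")
```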
Another problem related to microsimulation is the immense need for computation capacity and, in turn, hardware and staff resources [28]. These types of model tend to become rather voluminous and to engage large numbers of researchers with different specialities. This is due not only to the ambitious objectives of the model but also to the interdisciplinary character of the modelling approach. However, it should be stressed that microsimulation models and traditional models have quite different 'size functions' in terms of data storage and computation. Microsimulation models may have a larger initial size threshold but, as more variables are incorporated, they grow additively, whereas aggregate models grow multiplicatively.
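As a rough illustration of these different 'size functions' (the figures are purely hypothetical): an aggregate model that cross-tabulates 10 variables with 5 categories each must in principle carry 5^10 ≈ 9.8 million cells, and adding an eleventh variable multiplies that by 5; a micro database of 100,000 units with the same 10 attributes holds 100,000 × 10 = 1 million values, and an eleventh attribute adds only another 100,000.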
Microanalytic models are usually divided into two main categories: static and dynamic. Static models contain no temporal dimension. They cannot, for example, be used for analysing changes in demography, the indirect impact of policy changes outside the directly targeted behaviour, or long-term distributional effects. [...] At the policy level they are designed to answer questions such as: what are the (short-term equilibrium) effects on income distribution induced by changes within the welfare system? The major advantage of dynamic micro models is their ability to represent interdependent processes by ageing the micro units, i.e. without having to rely implicitly on equilibrium processes that lack empirical justification.
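A minimal sketch of this ageing mechanism (the attributes and transition probabilities below are invented): a dynamic model advances each micro unit through simulated years, applying demographic and behavioural transitions individually rather than assuming any aggregate equilibrium.

```python
import random

random.seed(2)

# Hypothetical micro units: individuals with an age and an employment state.
persons = [{"age": a, "employed": e} for a, e in
           [(25, True), (34, False), (58, True), (61, True), (17, False)]]

# Invented annual transition probabilities (in practice these would be
# estimated from longitudinal data, as discussed above).
P_FIND_JOB = 0.30     # P(unemployed -> employed) per year
P_LOSE_JOB = 0.05     # P(employed -> unemployed) per year
RETIREMENT_AGE = 65

def age_one_year(person):
    # Age the unit and apply its individual labour-market transition.
    person["age"] += 1
    if person["age"] >= RETIREMENT_AGE:
        person["employed"] = False
    elif person["employed"]:
        person["employed"] = random.random() >= P_LOSE_JOB
    else:
        person["employed"] = random.random() < P_FIND_JOB

# Dynamic simulation: age every micro unit forward ten years.
for year in range(10):
    for person in persons:
        age_one_year(person)

employment_rate = sum(p["employed"] for p in persons) / len(persons)
print(f"employment rate after 10 simulated years: {employment_rate:.0%}")
```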
Microsimulation models are also divided into deterministic and stochastic models. The outcomes of deterministic models will be the same in every simulation, as the defined parameters provide fully determined relationships. In some cases, where there is a rich set of information on the micro units and where there are strong structural constraints, such as tax liabilities or subsidies, it may be advisable to use a deterministic modelling approach. However, in most cases the major problem is lack of knowledge. One possible way to cope with uncertainty is to treat it as random noise. In a stochastic model the outcomes of different simulations will vary, making it possible to distinguish the impact of systematic parameter changes from the level of variation due to random processes. The random component can be justified in two ways:
[pp. 144-146]
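The distinction above between deterministic and stochastic outcomes can be sketched as follows (the tax rule and the moving probability are hypothetical): the deterministic rule returns the same result in every run, while repeated stochastic simulations vary from run to run, and that variation is what the effect of a systematic parameter change is judged against.

```python
import random

# Deterministic rule: with full information and a fixed structural constraint
# (a hypothetical flat tax schedule), every simulation gives the same result.
def tax_liability(income: float, rate: float = 0.30, allowance: float = 10_000.0) -> float:
    return max(income - allowance, 0.0) * rate

# Stochastic rule: uncertainty about behaviour is treated as random noise,
# here an invented 20% annual probability that a household moves.
def household_moves(p_move: float = 0.20) -> bool:
    return random.random() < p_move

print(tax_liability(40_000.0))   # identical in every run: 9000.0

# Repeating the stochastic simulation shows run-to-run variation, against
# which the impact of a systematic change in p_move could be compared.
for seed in range(3):
    random.seed(seed)
    movers = sum(household_moves() for _ in range(1_000))
    print(f"run {seed}: {movers} of 1000 households move")
```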